How in-memory computing is driving digital transformation technologies


Monday, August 7, 2017

Nikita Ivanov

Digital transformation initiatives are turning to in-memory computing to power their latest technologies.

It increasingly seems that every business wants to become a data-driven software company. The success of Airbnb, Alibaba, Netflix and many others has CEOs, CIOs, and CDOs jumping on the digital transformation bandwagon and imagining all the possible ways they can leverage their intellectual property and unique data to deliver a service instead of just shipping products. A builder of engine parts can deliver real-time monitoring of the health of installed parts. A manufacturer of computer printers can monitor ink levels and automatically ship refills. A maker of sprinkler system timers can monitor weather and soil conditions in order to optimize water usage.

No matter how great the idea, however, executing it often involves significant challenges. This is especially true when customer growth leads to ever-growing amounts of data that must be collected and analyzed in real time. Failing to plan ahead for cost-effective scalability can be a critical threat to the business model, leading inevitably to customer frustration and churn. The best way to avoid this dilemma is to deploy a scalable, next-generation architecture that can grow seamlessly to meet expanding needs. And a smart way to deploy a scalable architecture cost-effectively is to build an infrastructure that uses open source software and commodity hardware.

The strategy that many companies have found to achieve this may surprise you: in-memory computing. Many system designers still believe that in-memory computing is too expensive for most use cases, but this is no longer true. With the steady decline in costs, memory is now only slightly more expensive than disk-based storage. And next-generation, memory-centric platforms can future-proof today’s solutions against tomorrow’s challenges. By eliminating latency and dramatically improving application performance, today’s leading open source in-memory computing platforms offer an exceptional value proposition and can be considered for almost any type of digital transformation initiative. And industry-leading, tiered-memory solutions can ensure that both the scale and the performance of the system can be easily controlled far into the future, while allowing users to take advantage of any of a number of storage technologies, including spinning disks, solid state drives (SSDs), Flash, or 3D XPoint.

Inserted between the application and data layers, in-memory computing platforms support massive parallel processing across a highly available, distributed computing cluster with ACID transaction support. This enables huge amounts of data to be transacted and analyzed simultaneously in real time - a key requirement of most digital transformation projects.

In a typical application deployment, the data from the underlying RDBMS, NoSQL or Apache Hadoop data store is kept in the RAM of a distributed cluster built with commodity hardware. Keeping the data in RAM provides a significant performance boost. Further, leading in-memory computing platforms make it easy to scale - another requirement of digital transformation projects - by automatically utilizing the RAM of new nodes added to the cluster and rebalancing the dataset across the nodes, which also ensures high availability.
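
For illustration, here is a minimal sketch of that deployment pattern in Java, assuming Apache Ignite as the open source in-memory computing platform (the article does not name a specific product, and the "accounts" cache is hypothetical):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;

    public class DataGridSketch {
        public static void main(String[] args) {
            // Start a node; additional nodes started with the same configuration join the
            // cluster, and cached data is rebalanced across them automatically.
            try (Ignite ignite = Ignition.start()) {
                // A distributed cache that holds data in RAM across the cluster,
                // fronting the underlying database or data store.
                IgniteCache<Integer, String> accounts = ignite.getOrCreateCache("accounts");

                accounts.put(1, "First account record"); // write into the in-memory tier
                System.out.println(accounts.get(1));     // read served from RAM
            }
        }
    }

In practice the cache would be backed by the existing RDBMS, NoSQL or Hadoop store through read-through and write-through adapters, so the application keeps its system of record while gaining in-memory speed.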

The latest generation of open source in-memory computing platforms has introduced new memory-centric architectures that provide the optional ability to leverage additional data management capabilities. In new persistent storage architectures, the full dataset can be maintained on disk while a subset of the data is kept in various tiered memory layers which trade off cost and performance. Transaction processing and analytics can be performed across the entire dataset, regardless of whether the relevant data is in memory or only on disk. This new strategy enables organizations to establish the exact balance they want between performance and cost while obtaining all the benefits of a distributed, transactional SQL database that can be scaled out across thousands of servers.
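
A hedged configuration sketch of this tiered approach, again assuming Apache Ignite; the 4 GB in-memory region size is an illustrative value, not a recommendation from the article:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class TieredStorageSketch {
        public static void main(String[] args) {
            DataStorageConfiguration storageCfg = new DataStorageConfiguration();

            // Keep the full dataset on disk; the default data region is the in-memory tier.
            storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

            // Cap the in-memory tier (example: 4 GB). Queries and transactions still run
            // against the whole dataset, whether a given page is in RAM or only on disk.
            storageCfg.getDefaultDataRegionConfiguration().setMaxSize(4L * 1024 * 1024 * 1024);

            IgniteConfiguration cfg = new IgniteConfiguration()
                .setDataStorageConfiguration(storageCfg);

            try (Ignite ignite = Ignition.start(cfg)) {
                // With native persistence enabled, the cluster is activated before use.
                ignite.cluster().active(true);
            }
        }
    }

The trade-off is set by one knob: the larger the in-memory region, the more of the working set is served at RAM speed, while the disk tier keeps total capacity and cost under control.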

In-memory computing platforms can also support hybrid transactional/analytical processing (HTAP) use cases. HTAP can be especially relevant for Internet of Things (IoT) applications requiring real-time analysis of sensor and other external data sources. HTAP provides the ability to perform analytics in real-time on the operational dataset in a single unified OLTP and OLAP environment without impacting system performance.
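
As a sketch of what HTAP looks like in practice, assuming Apache Ignite and a hypothetical "Trade" record type: transactional writes and an analytical SQL aggregation run against the same in-memory dataset, with no separate warehouse.

    import java.util.List;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.query.SqlFieldsQuery;
    import org.apache.ignite.cache.query.annotations.QuerySqlField;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class HtapSketch {
        // Hypothetical operational record; fields are marked queryable so SQL can see them.
        static class Trade {
            @QuerySqlField String symbol;
            @QuerySqlField double price;

            Trade(String symbol, double price) {
                this.symbol = symbol;
                this.price = price;
            }
        }

        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                CacheConfiguration<Long, Trade> cfg = new CacheConfiguration<>("Trade");
                cfg.setIndexedTypes(Long.class, Trade.class); // expose Trade as a SQL table

                IgniteCache<Long, Trade> trades = ignite.getOrCreateCache(cfg);

                // OLTP path: writes into the operational dataset.
                trades.put(1L, new Trade("ACME", 100.5));
                trades.put(2L, new Trade("ACME", 101.0));

                // OLAP path: analytical SQL over the same in-memory data, in parallel
                // across the cluster nodes.
                SqlFieldsQuery query = new SqlFieldsQuery(
                    "SELECT symbol, AVG(price) FROM Trade GROUP BY symbol");

                for (List<?> row : trades.query(query).getAll())
                    System.out.println(row);
            }
        }
    }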

In-Memory Computing in Action


Today’s in-memory computing platforms are typically able to deliver an increase of 1,000x or more in OLTP and OLAP processing speeds compared to legacy applications built on disk-based databases. In many cases, these in-memory computing platforms can support hundreds of millions of transactions per second.

Sberbank, the largest bank in Russia and Eastern Europe, is taking it even further. As part of its digital transformation initiative, which is being driven by a 10-100x increase in concurrent transactions due to rapidly increasing mobile banking volumes, the bank is migrating to an open source in-memory computing architecture that can lower costs while providing higher levels of performance, reliability and scalability than previous generations of infrastructure.

Misys, a financial technology software provider with over 2,000 clients, including 48 of the world’s 50 largest banks, wanted to ensure its ability to manage huge amounts of trading and accounting data, high-speed transactions and real-time reporting. The firm also wanted to launch a new SaaS-based service, FusionFabric Connect, which includes a collection of modules that integrate many trading systems. Misys built an in-memory computing platform using commodity servers, each with 256GB RAM, and deployed the parallel processing cluster between its application and database layers. The open source in-memory computing platform eliminated all processing bottlenecks, enabling Misys to move forward with its digital transformation initiatives with confidence.

In-Memory Computing Platform Best-Practice Capabilities


- In-memory data grid: Inserted between the application and database layers to cache disk-based data from RDBMS, NoSQL and Hadoop databases. For reliability and easy scalability, ensure new nodes can be added to the cluster and data caches are automatically replicated and partitioned across multiple nodes. For maximum flexibility, look for a data grid that offers ACID-compliance and support for all popular databases.

- Distributed SQL: Supplements or even helps replace a disk-based RDBMS. For ease of use, the system should use ODBC and JDBC APIs for communication and should require minimal custom coding (see the JDBC sketch after this list). The solution should also be horizontally scalable, fault-tolerant and ANSI SQL-99 compliant. For maximum utility, it should support all DDL and DML commands. Support for geospatial data may also be useful.

- In-memory compute grid: Enables distributed parallel processing of resource-intensive compute tasks. Look for adaptive load balancing, automatic fault tolerance, linear scalability and custom scheduling. For maximum flexibility, look for a grid built around a pluggable service provider interface (SPI) to offer a direct API for Fork-Join and MapReduce processing.

- In-memory service grid: Provides control over the services deployed on the cluster nodes and guarantees continuous availability of the services when a node fails. The service grid should be able to automatically deploy services on node startup, deploy multiple instances of a service, and terminate a deployed service.

- In-memory streaming and continuous event processing: Establishes windows for processing and runs one-time or continuous queries against these windows. For flexibility, ensure customizable workflows and the ability to index data as it is being streamed to enable extremely fast distributed SQL queries against the streaming data.

- In-memory Apache Hadoop acceleration: Offers easy-to-use extensions to the disk-based Hadoop Distributed File System (HDFS) and traditional MapReduce. These enable the in-memory computing platform to be used as a caching layer for HDFS, offering read-through and write-through, while the compute grid runs MapReduce in memory. These extensions can deliver up to 10 times faster performance.
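
As referenced in the distributed SQL item above, here is a minimal sketch of DDL, DML and a query issued over standard JDBC, assuming Apache Ignite's JDBC thin driver; the connection URL, table and data are illustrative only:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DistributedSqlSketch {
        public static void main(String[] args) throws Exception {
            // Plain JDBC: no custom coding beyond the driver URL.
            try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
                 Statement stmt = conn.createStatement()) {

                // DDL and DML go through the same ANSI SQL interface as queries.
                stmt.executeUpdate(
                    "CREATE TABLE IF NOT EXISTS city (id INT PRIMARY KEY, name VARCHAR)");
                stmt.executeUpdate("INSERT INTO city (id, name) VALUES (1, 'London')");

                try (ResultSet rs = stmt.executeQuery("SELECT id, name FROM city")) {
                    while (rs.next())
                        System.out.println(rs.getInt(1) + " " + rs.getString(2));
                }
            }
        }
    }

Because the interface is standard JDBC, existing tools and application code written against a disk-based RDBMS can be pointed at the distributed SQL layer with little change.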

Conclusion


Digital transformation initiatives are changing the way companies do business and interact with their customers. But without a high-performance computing environment, companies won’t be able to deliver the required service quality at scale, sabotaging the vision of the new business model. Today’s top open source in-memory computing platforms can deliver the required high performance and cost-effective scalability, supporting faster growth and driving greater innovation. By deploying the newest generation of in-memory computing platforms, companies can future-proof their infrastructure and position themselves to leverage upcoming memory and data storage innovations.

This content is made possible by a guest author, or sponsor; it is not written by and does not necessarily reflect the views of App Developer Magazine's editorial staff.
